ViTs: Teaching Machines to See Time Series Anomalies Like Human Experts
Zexin Wang, Changhua Pei, Yang Liu, Hengyue Jiang, Quan Zhou, Haotian Si, Hang Cui, Jianhui Li, Gaogang Xie, Jingjing Li, Dan Pei
Web service administrators must ensure the stability of multiple systems by promptly detecting anomalies in Key Performance Indicators (KPIs). Achieving the goal of "train once, infer across scenarios" remains a fundamental challenge for time series anomaly detection models. Beyond improving zero-shot generalization, such models must also flexibly handle sequences of varying lengths during inference, ranging from one hour to one week, without retraining. Conventional approaches rely on sliding-window encoding and self-supervised learning, which restrict inference to fixed-length inputs. Large Language Models (LLMs) have demonstrated remarkable zero-shot capabilities across general domains. However, when applied to time series data, they face inherent limitations due to context length. To address this issue, we propose ViTs, a Vision-Language Model (VLM)-based framework that converts time series curves into visual representations. By rescaling time series images, temporal dependencies are preserved while maintaining a consistent input size, thereby enabling efficient processing of arbitrarily long sequences without context constraints. Training VLMs for this purpose introduces unique challenges, primarily due to the scarcity of aligned time series image-text data. To overcome this, we employ an evolutionary algorithm to automatically generate thousands of high-quality image-text pairs and design a three-stage training pipeline consisting of: (1) time series knowledge injection, (2) anomaly detection enhancement, and (3) anomaly reasoning refinement. Extensive experiments demonstrate that ViTs substantially enhances the ability of VLMs to understand and detect anomalies in time series data. All datasets and code will be publicly released at: https://anonymous.4open.science/r/ViTs-C484/.
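The abstract's central idea is that a series of any length can be rescaled into a fixed-size image before being fed to the VLM. The paper's actual rendering pipeline is not described here, so the following is only a minimal sketch of that idea under assumed choices (linear resampling, min-max normalization, one-pixel-per-column rasterization, and a hypothetical 32x64 canvas):

```python
import numpy as np

def series_to_image(series, width=64, height=32):
    """Rescale an arbitrary-length KPI series to a fixed-size binary curve image.

    Illustrative assumption only: resample to a fixed width, normalize,
    then light one pixel per column at the curve's height.
    """
    series = np.asarray(series, dtype=float)
    # Resample to a fixed number of columns via linear interpolation,
    # so temporal order is preserved at a consistent resolution.
    x_old = np.linspace(0.0, 1.0, num=len(series))
    x_new = np.linspace(0.0, 1.0, num=width)
    resampled = np.interp(x_new, x_old, series)
    # Min-max normalize to [0, 1]; a constant series maps to mid-level.
    lo, hi = resampled.min(), resampled.max()
    norm = np.full(width, 0.5) if hi == lo else (resampled - lo) / (hi - lo)
    # Rasterize: row 0 is the top of the image, so invert the value axis.
    rows = ((1.0 - norm) * (height - 1)).round().astype(int)
    image = np.zeros((height, width), dtype=np.uint8)
    image[rows, np.arange(width)] = 1
    return image

# Inputs of very different lengths yield the same fixed-size image,
# which is what frees the model from context-length constraints.
img_hour = series_to_image(np.sin(np.linspace(0, 6, 60)))       # ~1 hour of minutes
img_week = series_to_image(np.sin(np.linspace(0, 600, 10080)))  # ~1 week of minutes
```

The key property is that `img_hour` and `img_week` have identical shape, so downstream vision encoding cost does not grow with sequence length.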
Teaching Machines to Read and Comprehend (Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette)
Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.
Reviews: Teaching Machines to Describe Images with Natural Language Feedback
The paper presents an approach for automatically captioning images in which the model incorporates natural language feedback from humans along with ground truth captions during training. The proposed approach uses reinforcement learning to train a phrase-based captioning model: the model is first trained using maximum likelihood estimation (supervised learning) and then further finetuned using reinforcement learning, where the reward is a weighted sum of BLEU scores with respect to the ground truth and the feedback sentences provided by humans. The reward also includes phrase-level rewards obtained from the human feedback. The proposed model is trained and evaluated on the MSCOCO image captioning dataset, and is compared with a pure supervised learning (SL) model and a model trained using reinforcement learning (RL) without any feedback.
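The reward the review describes, a weighted sum of BLEU scores against the ground-truth caption and the human feedback sentence, can be sketched as follows. This is not the paper's implementation: `bleu1` is a toy unigram stand-in for full BLEU, and the weight `alpha` is an assumed hyperparameter:

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Toy unigram BLEU: clipped unigram precision times a brevity penalty.
    A deliberately simplified stand-in for the full BLEU metric."""
    cand, ref = candidate.split(), reference.split()
    if not cand:
        return 0.0
    # Clipped overlap: each candidate token counts at most as often
    # as it appears in the reference.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

def feedback_reward(candidate, ground_truth, feedback, alpha=0.5):
    """Weighted sum of BLEU w.r.t. the ground-truth caption and the human
    feedback sentence, as described in the review. `alpha` is an assumed
    weighting, not a value taken from the paper."""
    return (alpha * bleu1(candidate, ground_truth)
            + (1 - alpha) * bleu1(candidate, feedback))

r = feedback_reward("a dog runs in the park",
                    "a dog is running in a park",
                    "say the dog is running , not walking")
```

In the RL finetuning phase, a scalar reward of this shape would score each sampled caption, with the feedback term pushing the policy toward corrections the human actually requested.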
Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play, by David Foster (ISBN 9781492041948)
This book covers the key techniques that have dominated the generative modeling landscape in recent years and have allowed us to make impressive progress in creative tasks. As well as covering core generative modeling theory, we will be building full working examples of some of the key models from the literature and walking through the codebase for each, step by step. Throughout the book, you will find short, allegorical stories that help explain the mechanics of some of the models we will be building. I believe that one of the best ways to teach a new abstract theory is to first convert it into something that isn't quite so abstract, such as a story, before diving into the technical explanation. The individual steps of the theory are clearer within this context because they involve people, actions, and emotions, all of which are well understood, rather than abstract constructs such as neural networks, backpropagation, and loss functions.
Expanding artistic frontiers in artificial intelligence
Dr. Mohammed Elhoseiny, assistant professor of computer science at KAUST, has carved out a career teaching machines the art of creating art. After finishing his doctoral degree at Rutgers University in 2016, Elhoseiny went on to work for Adobe Research, Baidu Research, Facebook and now KAUST. His latest research paper, Creative Walk Adversarial Networks: Novel Art Generation with Probabilistic Random Walk Deviation from Style Norms, was accepted at the premier conference on computational creative artificial intelligence (AI), the International Conference on Computational Creativity (ICCC) 2022. The paper covers the work of Elhoseiny and his team VISION CAIR on the use of Creative Walk Adversarial Networks (CWAN) for novel, or original, art generation. CWAN learns about existing art styles in its training by being exposed to a large repository of paintings from various art movements and styles, from 5000 years ago to present times.
Teaching Machines to Think Like Us
Can intelligence be taught to robots? Advances in physical reservoir computing, a technology that makes sense of brain signals, could contribute to creating artificial intelligence machines that think like us. In Applied Physics Letters, from AIP Publishing, researchers from the University of Tokyo outline how a robot could be taught to navigate through a maze by electrically stimulating a culture of brain nerve cells connected to the machine. These nerve cells, or neurons, were grown from living cells and acted as the physical reservoir for the computer to construct coherent signals. The signals are regarded as homeostatic signals, telling the robot the internal environment was being maintained within a certain range and acting as a baseline as it moved freely through the maze.
WHY IT'S TIME TO UPDATE ELEARNING WITH ADAPTIVE LEARNING AND PERSONALIZATION – Performance Development Group
"It's a Teaching Machine and it will dramatically change the way people learn in the future." You do not need a DeLorean with a flux capacitor to see how far learning has come in the last seventy years. Let's travel back in time to the 1950s, when cars had tail fins, sock hops were in full swing, and hanging out in malt shops was the bee's knees. Surprisingly, the beginning of technology-assisted adaptive learning was also a product of the '50s. In 1954, B.F. Skinner came up with the idea of the "Teaching Machine" to accommodate the variable learning rates and attention spans of students.